VRSpeech is a simple application that illustrates ways of using the QuickTime VR API. VRSpeech uses Apple's Speech Recognition Manager to allow voice navigation of a panorama or object node. For instance, the user can navigate within a node by speaking commands such as "pan left 90 degrees," "zoom in," and "zoom out." The complete vocabulary is defined in the file LMSpeech.h.
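Recognized phrases are turned into ordinary QuickTime VR camera calls. The fragment below is only a rough sketch of that idea, not code taken from VRSpeech.c: the HandleVoiceCommand helper, the phrase parsing, the zoom factors, and the pan-angle sign convention are all assumptions made for illustration, while QTVRSetPanAngle, QTVRSetFieldOfView, and the other QTVR calls are the real API.

    /* Sketch only: map a recognized phrase onto a QuickTime VR movie.
       The helper name and parsing are invented; only the QTVR calls are real. */
    #include <QuickTimeVR.h>
    #include <stdio.h>
    #include <string.h>

    static void HandleVoiceCommand (QTVRInstance qtvr, const char *phrase)
    {
        QTVRSetAngularUnits(qtvr, kQTVRDegrees);        /* work in degrees */

        if (strncmp(phrase, "pan left", 8) == 0) {
            float degrees = 90.0f;                      /* e.g. "pan left 90 degrees" */
            sscanf(phrase + 8, "%f", &degrees);
            /* sign convention assumed: increasing pan angle turns the view left */
            QTVRSetPanAngle(qtvr, QTVRGetPanAngle(qtvr) + degrees);
        } else if (strncmp(phrase, "pan right", 9) == 0) {
            float degrees = 90.0f;
            sscanf(phrase + 9, "%f", &degrees);
            QTVRSetPanAngle(qtvr, QTVRGetPanAngle(qtvr) - degrees);
        } else if (strcmp(phrase, "zoom in") == 0) {
            QTVRSetFieldOfView(qtvr, QTVRGetFieldOfView(qtvr) / 2.0f);
        } else if (strcmp(phrase, "zoom out") == 0) {
            QTVRSetFieldOfView(qtvr, QTVRGetFieldOfView(qtvr) * 2.0f);
        }

        QTVRUpdate(qtvr, kQTVRCurrentMode);             /* redraw the node */
    }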
The Speech Recognition Manager is available on Developer CDs and via the Web, at http://www.speech.apple.com. In addition, several good articles in develop magazine, Issue 27 (September 1996), explain how to integrate the Speech Recognition Manager with your application.
Note that there is only a PPC version of this sample application. The Speech Recognition Manager is not yet available on 68K Macintoshes or on Windows computers.
VRSpeech is built on top of QTShell; it uses QTShell's MacFramework.c file unchanged and replaces the other QTShell source files with its own versions. The file VRSpeech.c contains the bulk of the speech recognition processing.
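That processing follows the usual Speech Recognition Manager pattern described in the develop Issue 27 articles: open a recognition system, create a recognizer, install an Apple event handler for speech-done events, and start listening. The sketch below shows that general shape only; the function names InitSpeechRecognition and MySpeechDoneHandler are invented here and do not necessarily match the names used in VRSpeech.c.

    /* Sketch of typical Speech Recognition Manager setup; not VRSpeech's actual code. */
    #include <SpeechRecognition.h>
    #include <AppleEvents.h>

    static SRRecognitionSystem  gRecSystem = NULL;
    static SRRecognizer         gRecognizer = NULL;

    static pascal OSErr MySpeechDoneHandler (const AppleEvent *theEvent, AppleEvent *reply, long refCon)
    {
        /* Extract the recognition result from the Apple event and map the
           recognized phrase onto a QuickTime VR action (see the earlier sketch). */
        return noErr;
    }

    static OSErr InitSpeechRecognition (void)
    {
        OSErr err;

        err = SROpenRecognitionSystem(&gRecSystem, kSRDefaultRecognitionSystemID);
        if (err != noErr) return err;

        err = SRNewRecognizer(gRecSystem, &gRecognizer, kSRDefaultSpeechSource);
        if (err != noErr) return err;

        /* Recognition results arrive as kAESpeechSuite/kAESpeechDone Apple events. */
        err = AEInstallEventHandler(kAESpeechSuite, kAESpeechDone,
                                    NewAEEventHandlerUPP(MySpeechDoneHandler), 0, false);
        if (err != noErr) return err;

        return SRStartListening(gRecognizer);
    }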
The file LMSpeech.rsrc contains a resource that holds the language model of words and phrases that we want to listen for. That resource was built using the SRLanguageModeler utility developed by the Speech Recognition group. You can modify the language model (defined in the file LMSpeech.h) and generate a new resource, if you wish.
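VRSpeech itself uses the prebuilt language model resource in LMSpeech.rsrc, but the Speech Recognition Manager also lets you build a language model in code. The sketch below shows that programmatic alternative; the phrases and the InstallCommandLanguageModel helper are examples only, and VRSpeech's full vocabulary lives in LMSpeech.h.

    /* Sketch: build a tiny language model in code instead of loading LMSpeech.rsrc. */
    #include <SpeechRecognition.h>
    #include <string.h>

    static OSErr InstallCommandLanguageModel (SRRecognitionSystem system, SRRecognizer recognizer)
    {
        SRLanguageModel  model = NULL;
        const char      *phrases[] = { "pan left", "pan right", "zoom in", "zoom out" };
        OSErr            err;
        int              i;

        err = SRNewLanguageModel(system, &model, "<commands>", strlen("<commands>"));
        if (err != noErr) return err;

        /* Add each phrase we want the recognizer to listen for. */
        for (i = 0; i < 4; i++) {
            err = SRAddText(model, phrases[i], strlen(phrases[i]), 0);
            if (err != noErr) break;
        }

        if (err == noErr)
            err = SRSetLanguageModel(recognizer, model);

        SRReleaseObject(model);   /* release our reference; SRM objects are reference counted */
        return err;
    }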